Results 1 - 20 of 32,716
1.
Nat Commun; 15(1): 3093, 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38600118

ABSTRACT

Sensory-motor interactions in the auditory system play an important role in vocal self-monitoring and control. These result from top-down corollary discharges that relay predictions about vocal timing and acoustics. Recent evidence suggests that such signals may reflect two distinct processes rather than a single mechanism: one suppressing neural activity during vocalization and another enhancing sensitivity to sensory feedback. Single-neuron recordings have been unable to disambiguate these processes because motor signals overlap with sensory inputs. Here, we sought to disentangle them in marmoset auditory cortex during production of multi-phrased 'twitter' vocalizations. Temporal responses revealed two timescales of vocal suppression: temporally precise phasic suppression during phrases and sustained tonic suppression. Both components were present within individual neurons; however, phasic suppression appeared broadly regardless of frequency tuning (gating), while tonic suppression was selective for vocal frequencies and feedback (prediction). This suggests that auditory cortex is modulated by concurrent corollary discharges with different computational mechanisms during vocalization.


Subjects
Auditory Cortex, Animals, Auditory Cortex/physiology, Neurons/physiology, Sensory Feedback/physiology, Feedback, Callithrix/physiology, Animal Vocalization/physiology, Auditory Perception/physiology, Acoustic Stimulation
2.
Sci Rep; 14(1): 9485, 2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38664478

ABSTRACT

Across two online experiments, this study explored the effect of preferred background music on attentional state and performance, as well as on mood and arousal, during a vigilance task. It extended recent laboratory findings (an increase in task-focus and a decrease in mind-wandering with music) to environments with more distractions around participants. Participants (people who normally listen to background music during attention-demanding tasks) completed the vigilance task in their homes both with and without their chosen music and reported their attentional state, subjective arousal, and mood valence throughout the task. Experiment 1 compared music to relative silence; Experiment 2 compared music against the backdrop of continuous noise to continuous noise alone. In both experiments, music decreased mind-wandering and increased task-focus. Unlike in previous laboratory studies, in both experiments music also led to faster reaction times while increasing low-arousal external-distraction states. Importantly, mood and arousal increased with music and were shown to mediate its effects on reaction time and, for the first time, attentional state, both separately and together. Serial mediation effects were mostly confined to models where mood was entered first and arousal second, and were consistent with the mood-arousal account of the impact of background music listening.


Subjects
Affect, Arousal, Attention, Music, Reaction Time, Humans, Music/psychology, Attention/physiology, Affect/physiology, Arousal/physiology, Female, Male, Adult, Reaction Time/physiology, Young Adult, Auditory Perception/physiology, Adolescent, Task Performance and Analysis
3.
Cell Rep; 43(4): 114081, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38581682

ABSTRACT

Narratives can synchronize neural and physiological signals between individuals, but the relationship between these signals, and the underlying mechanism, is unclear. We hypothesized a top-down effect of cognition on arousal and predicted that auditory narratives would drive not only brain signals but also peripheral physiological signals. We found that auditory narratives entrained gaze variation, saccade initiation, pupil size, and heart rate. This is consistent with a top-down effect of cognition on autonomic function. We also hypothesized a bottom-up effect, whereby autonomic physiology affects arousal. Controlled breathing affected pupil size, and heart rate was entrained by controlled saccades. Additionally, fluctuations in heart rate preceded fluctuations in pupil size and brain signals. Gaze variation, pupil size, and heart rate were all associated with anterior-central brain signals. Together, these results suggest bidirectional causal effects between peripheral autonomic function and central brain circuits involved in the control of arousal.


Subjects
Brain, Heart Rate, Humans, Brain/physiology, Female, Male, Heart Rate/physiology, Adult, Pupil/physiology, Young Adult, Arousal/physiology, Auditory Perception/physiology, Saccades/physiology, Cognition/physiology, Autonomic Nervous System/physiology, Acoustic Stimulation
4.
eNeuro; 11(4), 2024 Apr.
Article in English | MEDLINE | ID: mdl-38604776

ABSTRACT

Sensory stimulation is often accompanied by fluctuations at high frequencies (>30 Hz) in brain signals. These can be "narrowband" oscillations in the gamma band (30-70 Hz) or nonoscillatory "broadband" high-gamma (70-150 Hz) activity. Narrowband gamma oscillations, which are induced by some visual stimuli such as gratings and which weaken with healthy aging and the onset of Alzheimer's disease, hold promise as potential biomarkers. However, delivering visual stimuli is cumbersome because it requires head stabilization for eye tracking, so an equivalent auditory paradigm could be useful. Although simple auditory stimuli have been shown to produce high-gamma activity, whether specific auditory stimuli can also produce narrowband gamma oscillations is unknown. We tested whether auditory ripple stimuli, considered an analog of visual gratings, could elicit narrowband oscillations in auditory areas. We recorded 64-channel electroencephalography (EEG) from male and female subjects (18 each) while they either fixated on a monitor passively viewing static visual gratings or listened to stationary and moving ripples played over loudspeakers, with their eyes open or closed. We found that while visual gratings induced narrowband gamma oscillations with suppression in the alpha band (8-12 Hz), auditory ripples did not produce narrowband gamma but instead elicited a very strong broadband high-gamma response and suppression in the beta band (14-26 Hz). Even though we used equivalent stimuli in both modalities, our findings indicate that the underlying neuronal circuitry may not share common strategies for stimulus processing.
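The narrowband/broadband distinction above can be made concrete with a toy band-power computation in Python. The plain-DFT estimator and the synthetic 40 Hz trace below are illustrative stand-ins for the spectral estimators actually applied to EEG; this is not the study's analysis pipeline.

```python
import math

def band_power(x, srate, lo, hi):
    """Mean squared spectral magnitude of signal x within [lo, hi] Hz,
    computed from a plain DFT (a stand-in for the Welch/multitaper
    estimates typically used on EEG)."""
    n = len(x)
    power, count = 0.0, 0
    for k in range(n // 2 + 1):
        f = k * srate / n  # frequency of DFT bin k in Hz
        if lo <= f <= hi:
            re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / n
            count += 1
    return power / max(count, 1)

# Synthetic EEG-like trace: a pure 40 Hz "narrowband gamma" oscillation.
srate = 250
x = [math.sin(2 * math.pi * 40 * t / srate) for t in range(srate)]
gamma = band_power(x, srate, 30, 70)        # narrowband gamma band
high_gamma = band_power(x, srate, 70, 125)  # broadband high-gamma band
print(gamma > high_gamma)  # True: energy is concentrated in the gamma band
```

In a real analysis one would use an FFT-based power spectral density estimate instead of this quadratic-time loop; the point is only that a narrowband oscillation concentrates energy in one band while broadband activity spreads across many bins.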


Subjects
Acoustic Stimulation, Auditory Perception, Electroencephalography, Gamma Rhythm, Humans, Male, Female, Gamma Rhythm/physiology, Adult, Auditory Perception/physiology, Young Adult, Photic Stimulation/methods, Visual Perception/physiology
5.
PLoS One; 19(4): e0301478, 2024.
Article in English | MEDLINE | ID: mdl-38652721

ABSTRACT

Groove, or the pleasurable urge to move to music, offers unique insight into the relationship between emotion and action. The predictive coding of music model posits that groove is linked to predictions of music formed over time, with stimuli of moderate complexity rated as most pleasurable and most likely to engender movement. At the same time, listeners vary in the pleasure they derive from music listening: individuals with musical anhedonia report reduced pleasure during music listening despite no impairments in music perception and no general anhedonia. Little is known about musical anhedonics' subjective experience of groove. Here we examined the relationship between groove and music reward sensitivity. Participants (n = 287) heard drum-breaks that varied in perceived complexity and rated each for pleasure and wanting to move. Musical anhedonics (n = 13) had significantly lower ratings than controls (n = 13) matched on music perception abilities and general anhedonia. However, both groups demonstrated the classic inverted-U relationship between stimulus complexity and ratings of pleasure and wanting to move, with ratings peaking for intermediately complex stimuli. Across the entire sample, pleasure ratings were most strongly related to music reward sensitivity for highly complex stimuli (i.e., there was an interaction between music reward sensitivity and stimulus complexity). Finally, the sensorimotor subscale of music reward was uniquely associated with wanting-to-move ratings, but not pleasure ratings, above and beyond the five other dimensions of musical reward. Results highlight the multidimensional nature of reward sensitivity and suggest that pleasure and wanting to move are driven by overlapping but separable mechanisms.


Subjects
Anhedonia, Auditory Perception, Music, Pleasure, Reward, Humans, Music/psychology, Anhedonia/physiology, Female, Male, Adult, Pleasure/physiology, Young Adult, Auditory Perception/physiology, Emotions/physiology, Adolescent, Acoustic Stimulation
6.
PLoS One; 19(4): e0300219, 2024.
Article in English | MEDLINE | ID: mdl-38568916

ABSTRACT

Aphantasia is characterised by the inability to create mental images in one's mind. Studies investigating imagery impairments typically focus on the visual domain. However, it is possible to generate many other forms of imagery, including imagined auditory, kinesthetic, tactile, motor, taste, and other experiences. Recent studies show that individuals with aphantasia report a lack of imagery in modalities other than vision, including audition. However, to date, no research has examined whether these reductions in self-reported auditory imagery are associated with decrements on tasks that require auditory imagery. Understanding the extent to which visual and auditory imagery deficits co-occur can help to better characterise the core deficits of aphantasia and provide an alternative perspective on theoretical debates about the extent to which imagery draws on modality-specific or modality-general processes. In the current study, individuals who self-identified as aphantasic and matched control participants with typical imagery performed two tasks: a musical pitch-based imagery task and a voice-based categorisation task. The majority of participants with aphantasia self-reported significant deficits in both auditory and visual imagery. However, we did not find a concomitant decrease in performance on tasks requiring auditory imagery, either in the full sample or when considering only those participants who reported significant deficits in both domains. These findings are discussed in relation to the mechanisms that might obscure the observation of imagery deficits in auditory imagery tasks among people who report reduced auditory imagery.


Subjects
Imagery (Psychotherapy), Imagination, Humans, Self Report, Imagery (Psychotherapy)/methods, Auditory Perception
7.
Sci Rep; 14(1): 7627, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38561365

ABSTRACT

This study investigated the effects of reproducing an ultrasonic component above 20 kHz on subjective impressions of water sounds, using psychological and physiological information obtained by the semantic differential method and electroencephalography (EEG), respectively. The results indicated that the ultrasonic component affected the subjective impression of the water sounds. Regarding the relationship between psychological and physiological measures, a moderate correlation was confirmed between the EEG change rate and subjective impressions. However, no difference in this relationship was found between conditions with and without the ultrasonic component, suggesting that ultrasound does not directly affect the relationship between subjective impressions and EEG energy at the current stage. Furthermore, the correlations calculated for the left and right channels in the occipital region differed significantly, which suggests functional asymmetry in sound perception between the right and left hemispheres.


Subjects
Hearing, Sound, Electroencephalography/methods, Auditory Perception/physiology, Acoustic Stimulation
8.
Sci Rep; 14(1): 7764, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38565622

ABSTRACT

Sound is sensed by the ear but can also be felt on the skin by means of vibrotactile stimulation. Little research has addressed the perceptual implications of vibrotactile stimulation in the realm of music. Here, we studied which perceptual dimensions of music listening are affected by vibrotactile stimulation and whether spatially segregating the vibrations improves it. Forty-one listeners were presented with vibrotactile stimuli via a chair's surfaces (left and right arm rests, back rest, seat) in addition to music presented over headphones. Vibrations for each surface were derived from individual tracks of the music (multi condition) or conjointly from a mono rendering, in addition to incongruent and headphones-only conditions. Listeners evaluated unknown music from popular genres according to valence, arousal, groove, the feeling of being part of a live performance, the feeling of being part of the music, and liking. Results indicated that the multi and mono vibration conditions robustly enhanced the musical experience compared to listening via headphones alone. Vibrotactile enhancement was strong in the latent dimension of 'musical engagement', encompassing the sense of being part of the music, arousal, and groove. These findings highlight the potential of vibrotactile cues for creating intense musical experiences.


Subjects
Music, Sound, Vibration, Emotions, Cues, Auditory Perception/physiology
9.
JASA Express Lett; 4(4), 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38662119

ABSTRACT

This study presents a dataset of audio-visual soundscape recordings at 62 different locations in Singapore, initially made as full-length recordings over spans of 9-38 min. For consistency and reduction in listener fatigue in future subjective studies, one-minute excerpts were cropped from the full-length recordings. An automated method using pre-trained models for Pleasantness and Eventfulness (according to ISO 12913) in a modified partitioning around medoids algorithm was employed to generate the set of excerpts by balancing the need to encompass the perceptual space with uniformity in distribution. A validation study on the method confirmed its adherence to the intended design.
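The excerpt-selection step can be sketched as a naive partitioning-around-medoids (PAM) pass over a 2-D perceptual space. The random toy coordinates below are hypothetical stand-ins for the pre-trained Pleasantness/Eventfulness model outputs, and `k_medoids` is an illustrative helper, not the authors' implementation.

```python
import random

def k_medoids(points, k, iters=50, seed=0):
    """Naive partitioning-around-medoids on 2-D feature vectors.

    points : list of (pleasantness, eventfulness) tuples
    Returns the indices of the k medoids, i.e. one representative
    excerpt per cluster."""
    rng = random.Random(seed)
    dist = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    medoids = rng.sample(range(len(points)), k)
    for _ in range(iters):
        # Assign each point to its nearest medoid.
        clusters = {m: [] for m in medoids}
        for i, p in enumerate(points):
            nearest = min(medoids, key=lambda m: dist(p, points[m]))
            clusters[nearest].append(i)
        # Re-pick each medoid as the member minimising total within-cluster distance.
        new_medoids = []
        for m, members in clusters.items():
            best = min(members, key=lambda c: sum(dist(points[c], points[j]) for j in members))
            new_medoids.append(best)
        if set(new_medoids) == set(medoids):
            break  # converged
        medoids = new_medoids
    return sorted(medoids)

# Toy perceptual space: 62 random (pleasantness, eventfulness) scores.
rng = random.Random(42)
space = [(rng.random(), rng.random()) for _ in range(62)]
chosen = k_medoids(space, k=8)
print(len(chosen))  # 8 representative excerpts
```

Each returned medoid is an actual recording representing one region of the perceptual space, which is why PAM, rather than k-means (whose centroids are not real excerpts), suits excerpt selection.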


Subjects
Auditory Perception, Singapore, Humans, Auditory Perception/physiology, Algorithms, Sound
10.
Cereb Cortex; 34(4), 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38629796

ABSTRACT

Neuroimaging studies have shown that the neural representation of imagery is closely related to that of the corresponding perception modality; however, the undeniably different experiences of perception and imagery indicate clear differences in their neural mechanisms, which cannot be explained by the simple theory that imagery is merely a weak form of perception. Considering the importance of functional integration of brain regions in neural activity, we conducted correlation analyses of neural activity in brain regions jointly activated by auditory imagery and perception, obtaining brain functional connectivity (FC) networks with a consistent structure. However, the connection values between areas in the superior temporal gyrus and the right precentral cortex were significantly higher in auditory perception than in imagery. In addition, modality decoding based on FC patterns showed that the FC networks of auditory imagery and perception are significantly distinguishable. Subsequent voxel-level FC analysis further verified the regions containing voxels with significant connectivity differences between the two modalities. This study clarifies the correlations and differences between auditory imagery and perception in terms of brain information interaction, and it provides a new perspective for investigating the neural mechanisms of different modal information representations.


Subjects
Auditory Cortex, Brain Mapping, Brain Mapping/methods, Imagination, Brain/diagnostic imaging, Auditory Perception, Cerebral Cortex, Magnetic Resonance Imaging/methods, Auditory Cortex/diagnostic imaging
11.
Sci Rep; 14(1): 8814, 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38627479

ABSTRACT

Rhythm perception and synchronisation is a musical ability with a neural basis, defined as the ability to perceive rhythm in music and to synchronise body movements with it. The study aimed to examine synchronisation errors and physiological responses to metrorhythmic stimuli under synchronous and pseudosynchronous stimulation (nominally synchronising with an externally controlled rhythm that is in fact controlled or produced by the subject's own tapping). Nineteen subjects without diagnosed motor disorders participated in the study. Two tests were performed, in which the electromyography signal and reaction time were recorded using the NORAXON system. In addition, physiological signals such as electrodermal activity and blood volume pulse were measured using the Empatica E4. Study 1 adapted the finger tapping test in pseudosynchrony with a given metrorhythmic stimulus, with choices of preferred, decreasing, and increasing tempo. Study 2 consisted of metrorhythmic synchronisation during a heel stomping test. Numerous correlations and statistically significant parameters were found between the subjects' responses and their musical education and musical and sports activities. Most of the differentiating characteristics showed evidence of group divisions in the undertaking of musical activities. Detailed analyses of synchronisation errors could contribute to methods that improve the rehabilitation of subjects with motor dysfunction, and to the development of an expert system that considers personalised musical preferences.
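Synchronisation errors in tapping tasks of this kind are usually quantified as signed asynchronies between tap onsets and metronome onsets. A minimal sketch follows; the click track and tap times are invented for illustration.

```python
def sync_errors(taps, metronome):
    """Signed asynchronies (in seconds): each tap time minus its nearest
    metronome onset. Negative values mean the tap anticipated the click,
    the typical finding in sensorimotor synchronisation studies."""
    return [t - min(metronome, key=lambda m: abs(t - m)) for t in taps]

# Hypothetical 120 BPM click track (0.5 s inter-onset interval) and taps.
clicks = [i * 0.5 for i in range(9)]
taps = [0.48, 0.97, 1.52, 1.99, 2.51, 3.02, 3.46, 3.98]
errs = sync_errors(taps, clicks)
mean_async = sum(errs) / len(errs)  # negative mean = anticipation tendency
print(mean_async < 0)  # True for these illustrative taps
```

The mean asynchrony summarises bias (anticipation vs lag), while the standard deviation of the asynchronies summarises timing stability; both are standard error measures in this literature.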


Subjects
Music, Sports, Humans, Movement/physiology, Reaction Time, Auditory Perception/physiology, Acoustic Stimulation
12.
Sci Rep; 14(1): 8739, 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38627572

ABSTRACT

Inspired by recent findings in the visual domain, we investigated whether the stimulus-evoked pupil dilation reflects temporal statistical regularities in sequences of auditory stimuli. We conducted two preregistered pupillometry experiments (experiment 1, n = 30, 21 females; experiment 2, n = 31, 22 females). In both experiments, human participants listened to sequences of spoken vowels in two conditions. In the first condition, the stimuli were presented in a random order and, in the second condition, the same stimuli were presented in a sequence structured in pairs. The second experiment replicated the first experiment with a modified timing and number of stimuli presented and without participants being informed about any sequence structure. The sound-evoked pupil dilation during a subsequent familiarity task indicated that participants learned the auditory vowel pairs of the structured condition. However, pupil diameter during the structured sequence did not differ according to the statistical regularity of the pair structure. This contrasts with similar visual studies, emphasizing the susceptibility of pupil effects during statistically structured sequences to experimental design settings in the auditory domain. In sum, our findings suggest that pupil diameter may serve as an indicator of sound pair familiarity but does not invariably respond to task-irrelevant transition probabilities of auditory sequences.
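The pair structure used in such statistical-learning designs can be sketched as follows: within a pair the transition probability is 1.0, while transitions between pairs are random. The vowel pairing below is hypothetical, not the stimulus set of the study.

```python
import random

def structured_sequence(pairs, n_pairs, seed=0):
    """Concatenate randomly chosen fixed pairs: within a pair the
    transition probability is 1.0; between pairs it is 1/len(pairs)."""
    rng = random.Random(seed)
    seq = []
    for _ in range(n_pairs):
        seq.extend(rng.choice(pairs))
    return seq

vowel_pairs = [("a", "e"), ("i", "o"), ("u", "y")]  # hypothetical pairing
seq = structured_sequence(vowel_pairs, n_pairs=100)

# Check the defining regularity: the second element of a pair always
# follows its first (pair-internal transition probability = 1.0).
firsts = {p[0]: p[1] for p in vowel_pairs}
follows = all(seq[i + 1] == firsts[seq[i]] for i in range(0, len(seq), 2))
print(follows)  # True: pair-internal transitions are deterministic
```

A random-order control condition would shuffle the same items without this constraint, so that no transition is more predictable than any other.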


Subjects
Pupil, Sound, Female, Humans, Pupil/physiology, Recognition (Psychology), Auditory Perception/physiology
13.
J Acoust Soc Am; 155(4): 2756-2768, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38662607

ABSTRACT

In soundscape research, the distinction between felt emotion and perceived emotion has not been addressed as thoroughly as in music studies. This research investigated perceived and felt emotions in soundscape evaluation of urban open spaces through a laboratory audio-visual experiment using photographs and binaural recordings of 16 urban open locations across Harbin, China. In total, 46 participants assessed both the "perceived emotion" and "felt emotion" of the soundscapes using a questionnaire (in Chinese). First, five felt emotions and seven perceived emotions associated with the soundscapes were identified, among which the dominant factors were enjoyment and excitement for felt emotions and comfortable and festive for perceived emotions. Second, when comparing perceived and felt emotions, the holistic soundscape descriptor "preference" is better predicted from felt emotion, while the holistic descriptor "appropriateness" is better predicted from perceived emotion. Third, preference is a more stringent soundscape descriptor than appropriateness, indicating a higher level of requirement in its definition; it is also a more emotional descriptor than appropriateness. It may be inferred that, for evaluating soundscapes, the more emotional the descriptor, the greater its stringency.


Subjects
Auditory Perception, Emotions, Humans, Male, Female, Adult, Young Adult, China, Surveys and Questionnaires, Acoustic Stimulation, Noise/adverse effects, Sound
14.
Anim Cogn; 27(1): 17, 2024 Mar 02.
Article in English | MEDLINE | ID: mdl-38429431

ABSTRACT

A central feature of music is the hierarchical organization of its components. Musical pieces are not a simple concatenation of chords but are characterized by rhythmic and harmonic structures. Here, we explore whether sensitivity to music structure can emerge in the absence of any experience with musical stimuli. To do so, we tested whether rats detect the difference between structured and unstructured musical excerpts and compared their performance with that of humans. Structured melodies were excerpts of Mozart's sonatas; unstructured melodies were created by recombining fragments of different sonatas. We trained listeners (both human participants and Long-Evans rats) with a set of structured and unstructured excerpts, and tested them with completely novel excerpts they had not heard before. After hundreds of training trials, rats were able to tell apart novel structured from unstructured melodies. Human listeners required only a few trials to reach better performance than rats. Interestingly, human performance improved when tonality changes were included, while rat performance decreased to chance. Our results suggest that, with enough training, rats can learn to discriminate the acoustic differences that distinguish hierarchically structured music from unstructured excerpts. More importantly, the results point toward species-specific adaptations in how tonality is processed.


Subjects
Auditory Perception, Humans, Rats, Animals, Long-Evans Rats
15.
Anim Cogn; 27(1): 8, 2024 Mar 02.
Article in English | MEDLINE | ID: mdl-38429588

ABSTRACT

Predation risk may affect the foraging behavior of birds. However, there has been little research on the ability of domestic birds to perceive predation risk and thus adjust their feeding behavior. In this study, we tested whether domestic budgerigars (Melopsittacus undulatus) perceived predation risk after the presentation of specimens and sounds of sparrowhawks (Accipiter nisus), domestic cats (Felis catus), and humans, and whether this in turn influenced their feeding behavior. When exposed to visual or acoustic stimuli, budgerigars showed significantly longer latency to feed under sparrowhawk, domestic cat, and human treatments than with controls. Budgerigars responded more strongly to acoustic stimuli than visual stimuli, and they showed the longest latency to feed and the least number of feeding times in response to sparrowhawk calls. Moreover, budgerigars showed shorter latency to feed and greater numbers of feeding times in response to human voices than to sparrowhawk or domestic cat calls. Our results suggest that domestic budgerigars may identify predation risk through visual or acoustic signals and adjust their feeding behavior accordingly.


Subjects
Auditory Perception, Melopsittacus, Humans, Animals, Cats, Auditory Perception/physiology, Melopsittacus/physiology, Predatory Behavior, Acoustics, Sound
16.
J Acoust Soc Am; 155(3): 1895-1908, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38456732

ABSTRACT

Humans rely on auditory feedback to monitor and adjust their speech for clarity. Cochlear implants (CIs) have helped over a million people regain access to auditory feedback, which significantly improves speech production. However, there is substantial variability in outcomes. This study investigates the extent to which CI users can use their auditory feedback to detect self-produced sensory errors and adjust their speech, given the coarse spectral resolution provided by their implants. First, we used an auditory discrimination task to assess the sensitivity of CI users to small differences in the formant frequencies of their self-produced vowels. Then, CI users produced words with altered auditory feedback in order to assess sensorimotor adaptation to auditory error. Almost half of the CI users tested could detect small, within-channel differences in their self-produced vowels, and they could utilize this auditory feedback for speech adaptation. An acoustic-hearing control group showed better sensitivity to the vowel shifts, even in CI-simulated speech, and exhibited more robust speech adaptation than the CI users. Nevertheless, this study confirms that CI users can compensate for sensory errors in their speech and supports the idea that sensitivity to these errors may relate to variability in production.


Subjects
Cochlear Implantation, Cochlear Implants, Speech Perception, Humans, Auditory Perception, Speech
17.
Elife; 12, 2024 Mar 12.
Article in English | MEDLINE | ID: mdl-38470243

ABSTRACT

Preserved communication abilities promote healthy ageing. To this end, the age-typical loss of sensory acuity might in part be compensated for by an individual's preserved attentional neural filtering. Is such a compensatory brain-behaviour link longitudinally stable? Can it predict individual change in listening behaviour? Modelling electroencephalographic and behavioural data from N = 105 ageing individuals (39-82 y), we here show that individual listening behaviour and neural filtering ability follow largely independent developmental trajectories. First, despite the expected decline in hearing-threshold-derived sensory acuity, listening-task performance proved stable over 2 y. Second, neural filtering and behaviour were correlated only within each separate measurement timepoint (T1, T2). Longitudinally, however, our results urge caution in using attention-guided neural filtering metrics as predictors of individual trajectories in listening behaviour: neither neural filtering at T1 nor its 2-year change could predict individual 2-year behavioural change, under a combination of modelling strategies.


Humans are social animals. Communicating with other humans is vital for our social wellbeing, and having strong connections with others has been associated with healthier aging. For most humans, speech is an integral part of communication, but speech comprehension can be challenging in everyday social settings: imagine trying to follow a conversation in a crowded restaurant or decipher an announcement in a busy train station. Noisy environments are particularly difficult to navigate for older individuals, since age-related hearing loss can impact the ability to detect and distinguish speech sounds. Some aging individuals cope better than others with this problem, but the reason why, and how listening success can change over a lifetime, is poorly understood. One of the mechanisms involved in the segregation of speech from other sounds depends on the brain applying a 'neural filter' to auditory signals. The brain does this by aligning the activity of neurons in a part of the brain that deals with sounds, the auditory cortex, with fluctuations in the speech signal of interest. This neural 'speech tracking' can help the brain better encode the speech signals that a person is listening to. Tune and Obleser wanted to know whether the accuracy with which individuals can implement this filtering strategy represents a marker of listening success. Further, the researchers wanted to answer whether differences in the strength of the neural filtering observed between aging listeners could predict how their listening ability would develop, and determine whether these neural changes were connected with changes in people's behaviours. To answer these questions, Tune and Obleser used data collected from a group of healthy middle-aged and older listeners twice, two years apart. They then built mathematical models using these data to investigate how differences between individuals in the brain and in behaviours relate to each other. 
The researchers found that, across both timepoints, individuals with stronger neural filtering were better at distinguishing speech and listening. However, neural filtering strength measured at the first timepoint was not a good predictor of how well individuals would be able to listen two years later. Indeed, changes at the brain and the behavioural level occurred independently of each other. Tune and Obleser's findings will be relevant to neuroscientists, as well as to psychologists and audiologists whose goal is to understand differences between individuals in terms of listening success. The results suggest that neural filtering guided by attention to speech is an important readout of an individual's attention state. However, the results also caution against explaining listening performance based solely on neural factors, given that listening behaviours and neural filtering follow independent trajectories.


Subjects
Aging, Longevity, Adult, Humans, Brain, Auditory Perception, Benchmarking
18.
Cortex; 174: 137-148, 2024 May.
Article in English | MEDLINE | ID: mdl-38547812

ABSTRACT

Attention is not constant but fluctuates over time, and these attentional fluctuations may prioritize the processing of certain events over others. In music listening, the pleasurable urge to move to music (termed 'groove' by music psychologists) offers a particularly convenient case study of oscillatory attention because it engenders synchronous, oscillatory movements that also vary predictably with stimulus complexity. In this study, we simultaneously recorded pupillometry and scalp electroencephalography (EEG) while participants listened to drumbeats of varying complexity, which they afterwards rated for groove. Using the intertrial phase coherence at the beat frequency, we found that while subjects were listening, their pupil activity became entrained to the beat of the drumbeats, and this entrained attention persisted in the EEG even as subjects imagined the drumbeats continuing through subsequent silent periods. Entrainment in both the pupillometry and the EEG weakened with increasing rhythmic complexity, indicating poorer sensory precision as the beat became more obscured. Additionally, sustained pupil dilations revealed the expected inverted-U relationship between rhythmic complexity and groove ratings. Taken together, this work links oscillatory attention to rhythmic complexity in relation to musical groove.
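Intertrial phase coherence (ITPC) at the beat frequency, the measure used here, is the magnitude of the across-trial mean of unit-length phasors at that frequency. Below is a minimal sketch on synthetic phase-locked trials, not the study's data.

```python
import cmath
import math

def itpc(trials, freq, srate):
    """Intertrial phase coherence at one frequency.

    For each trial, the phase of the Fourier component at `freq` is
    extracted; ITPC is the magnitude of the mean unit phasor across
    trials (1 = perfectly phase-locked, near 0 = random phase)."""
    phasors = []
    for x in trials:
        n = len(x)
        # Single-frequency DFT coefficient at `freq`.
        coef = sum(x[t] * cmath.exp(-2j * math.pi * freq * t / srate) for t in range(n))
        phasors.append(coef / abs(coef))  # assumes nonzero power at `freq`
    return abs(sum(phasors) / len(phasors))

# Synthetic check: a 2 Hz "beat" with identical phase across 20 trials.
srate, dur, beat = 100, 2.0, 2.0
t = [i / srate for i in range(int(srate * dur))]
locked = [[math.sin(2 * math.pi * beat * ti) for ti in t] for _ in range(20)]
print(round(itpc(locked, beat, srate), 2))  # 1.0, i.e. fully phase-locked
```

Values near 1 indicate that the oscillatory phase at the beat frequency is reproducible across trials (entrainment); trials with random phases would drive the value toward 0.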


Subjects
Auditory Perception, Music, Humans, Acoustic Stimulation, Electroencephalography, Periodicity, Movement
19.
Int J Psychophysiol; 199: 112328, 2024 May.
Article in English | MEDLINE | ID: mdl-38458383

ABSTRACT

According to the arousal-mood hypothesis, changes in arousal and mood during auditory stimulation underlie the resulting impairments or improvements in cognitive performance. Findings supporting or contradicting this hypothesis are, however, often based on subjective ratings of arousal rather than autonomic/physiological indices. To assess the arousal-mood hypothesis, we carried out a systematic review of 31 studies investigating cardiac, electrodermal, and pupillometry measures during exposure to different types of auditory stimulation (music, ambient noise, white noise, and binaural beats) in relation to cognitive performance. Our review suggests that the evidence on how music, noise, or binaural beats affect these measures in relation to cognitive performance is either mixed or insufficient to draw conclusions. Importantly, the evidence for or against the arousal-mood hypothesis is at best indirect, because autonomic arousal and cognitive performance are often considered separately. Future research is needed to evaluate the effects of auditory stimulation on autonomic arousal and cognitive performance holistically.


Subjects
Music, Humans, Acoustic Stimulation, Music/psychology, Arousal/physiology, Attention, Cognition, Auditory Perception/physiology
20.
Cortex; 174: 1-18, 2024 May.
Article in English | MEDLINE | ID: mdl-38484435

ABSTRACT

Hearing-in-noise (HIN) ability is crucial in speech and music communication. Recent evidence suggests that absolute pitch (AP), the ability to identify isolated musical notes, is associated with HIN benefits. A theoretical account postulates a link between AP ability and neural network indices of segregation. However, how AP ability modulates the brain activation and functional connectivity underlying HIN perception remains unclear. Here we used functional magnetic resonance imaging to contrast brain responses in a sample (n = 45) comprising 15 AP musicians, 15 non-AP musicians, and 15 non-musicians perceiving Mandarin speech and melody targets under varying signal-to-noise ratios (SNRs: No-Noise, 0, -9 dB). Results reveal that AP musicians exhibited increased activation in auditory and superior frontal regions across both HIN domains (music and speech), irrespective of noise level. Notably, substantially higher sensorimotor activation was found in AP musicians when the target was music rather than speech. Furthermore, we examined AP effects on neural connectivity using psychophysiological interaction analysis with the auditory cortex as the seed region. AP musicians showed decreased functional connectivity with the sensorimotor cortex and middle frontal gyrus compared to non-AP musicians. Crucially, AP differentially affected connectivity with parietal and frontal regions depending on whether the HIN domain was music or speech. These findings suggest that AP plays a critical role in HIN perception, manifested as increased activation and functional independence between auditory and sensorimotor regions when perceiving music and speech streams.


Subjects
Auditory Cortex, Music, Speech Perception, Humans, Brain/physiology, Auditory Perception/physiology, Hearing, Auditory Cortex/physiology, Brain Mapping, Speech Perception/physiology, Pitch Perception/physiology, Acoustic Stimulation